In the wireless Federated Learning (FL) architecture, model parameter data must be exchanged continuously between the clients and the server to update the model, which causes large communication overhead and power consumption on the client. Many existing methods reduce this overhead through data quantization and data sparsification. To reduce the communication overhead further, a wireless FL algorithm based on 1-bit compressive sensing was proposed. In the uplink of the wireless FL architecture, the client first recorded the update parameters of its local model, including the update amplitude and the update trend. Then, the amplitude and trend information was sparsified, and the threshold required for an update was determined. Finally, 1-bit compressive sensing was applied to the update-trend information, thereby compressing the uplink data. On this basis, the data size was further compressed by setting a dynamic threshold. Experimental results on the MNIST dataset show that the 1-bit compressive sensing process with the dynamic threshold achieves the same results as lossless transmission, and reduces the amount of model parameter data transmitted by the client during the uplink communication of FL to 1/25 of that of the normal FL process without this method. Moreover, when the global model is trained to the same level, it reduces the total uploaded data size to 2/11 of the original and the transmission energy consumption to 1/10 of the original.
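The core compression step described above can be illustrated with a minimal sketch of 1-bit compressive sensing: the client keeps only the sign of random projections of a sparse update vector (one bit per measurement), and the server recovers the update direction with a simple binary iterative hard thresholding (BIHT) loop. The dimensions, sparsity level, and the use of BIHT here are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100   # length of the (sparsified) update-trend vector (assumed)
k = 5     # number of nonzero entries after sparsification (assumed)
m = 60    # number of 1-bit measurements sent uplink (assumed)

# Hypothetical sparse update vector: only k entries exceed the update threshold.
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)
x /= np.linalg.norm(x)  # 1-bit CS recovers direction only, so work on the unit sphere

# Client side: project with a random matrix (shared via a common seed)
# and keep only the sign of each measurement -- 1 bit per measurement.
A = rng.standard_normal((m, n))
y = np.sign(A @ x)

# Server side: binary iterative hard thresholding (BIHT), a standard
# 1-bit CS recovery method (an assumption here, not necessarily the
# paper's decoder).
def biht(A, y, k, iters=100, tau=0.01):
    x_hat = np.zeros(A.shape[1])
    for _ in range(iters):
        # Gradient step that pushes measurements toward sign consistency.
        x_hat += tau * A.T @ (y - np.sign(A @ x_hat))
        # Hard thresholding: keep only the k largest-magnitude entries.
        small = np.argsort(np.abs(x_hat))[:-k]
        x_hat[small] = 0.0
    norm = np.linalg.norm(x_hat)
    return x_hat / norm if norm > 0 else x_hat

x_hat = biht(A, y, k)
corr = float(x_hat @ x)  # cosine similarity with the true update direction
```

With only m single-bit measurements instead of n floating-point values, the recovered direction `x_hat` should align closely with the true sparse update, which is the mechanism behind the uplink data reduction reported in the abstract.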